We address the problem of data augmentation for LiDAR. Given a LiDAR scan of a scene from some position, how do we simulate new scans of that scene from different, secondary positions? The method defines criteria for selecting valid secondary positions and then estimates which points from the original point cloud would be acquired by a scanner placed at those positions. We validate the method using synthetic scenes and examine how the similarity of the resulting point cloud depends on scanner distance, occlusion, and angular resolution. We show that the method is more accurate at short distances, and that a high scanner resolution for the original point cloud has a strong influence on the similarity of the generated point cloud. We also show how the method can be applied to natural scene statistics: in particular, we apply it to reposition the scanner horizontally and vertically, consider points belonging to the ground and to non-ground objects separately, and describe the effect on the distributions of distances to these two classes of points.
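For a rough sense of the re-scanning idea, here is a minimal sketch, not the paper's actual algorithm: it bins the original points into angular cells as seen from a hypothetical secondary scanner position and keeps only the nearest point per cell, crudely modelling occlusion and finite angular resolution. The function name, resolution values, and toy data are illustrative assumptions.

```python
import numpy as np

def simulate_rescan(points, scanner_pos, az_res_deg=0.2, el_res_deg=0.2):
    """Keep the nearest original point per angular cell seen from scanner_pos."""
    rel = points - scanner_pos
    dist = np.linalg.norm(rel, axis=1)
    az = np.degrees(np.arctan2(rel[:, 1], rel[:, 0]))
    el = np.degrees(np.arcsin(rel[:, 2] / np.maximum(dist, 1e-9)))
    az_bin = np.round(az / az_res_deg).astype(int)
    el_bin = np.round(el / el_res_deg).astype(int)
    nearest = {}
    for i, cell in enumerate(zip(az_bin, el_bin)):
        if cell not in nearest or dist[i] < dist[nearest[cell]]:
            nearest[cell] = i  # the closest point in a cell occludes the rest
    return points[sorted(nearest.values())]

# Toy usage: a random cloud "re-scanned" from a shifted scanner position.
cloud = np.random.rand(10000, 3) * 20.0
new_scan = simulate_rescan(cloud, scanner_pos=np.array([1.0, 0.0, 1.5]))
print(len(new_scan), "of", len(cloud), "points visible at this resolution")
```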
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has been commonly applied to two human partners who are trying to work jointly together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between each partner.
We consider the problem of finding an accurate representation of neuron shapes, extracting sub-cellular features, and classifying neurons based on neuron shapes. In neuroscience research, the skeleton representation is often used as a compact and abstract representation of neuron shapes. However, existing methods are limited to extracting and analyzing "curve" skeletons, which can only be applied to tubular shapes. This paper presents a 3D neuron morphology analysis method for more general and complex neuron shapes. First, we introduce the concept of a skeleton mesh to represent general neuron shapes and propose a novel method for computing mesh representations from 3D surface point clouds. A skeleton graph is then obtained from the skeleton mesh and is used to extract sub-cellular features. Finally, an unsupervised learning method is used to embed the skeleton graph for neuron classification. Extensive experimental results demonstrate the robustness of our method for analyzing neuron morphology.
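To illustrate the "skeleton graph to sub-cellular features" step, here is a hedged sketch: once a skeleton has been turned into a graph, simple graph statistics (branch points, terminal tips, path lengths) can serve as morphology features. The toy graph and the particular feature choices below are illustrative placeholders, not the paper's actual pipeline.

```python
import networkx as nx

# Toy skeleton graph: nodes are skeleton points, edge weights are Euclidean lengths.
G = nx.Graph()
edges = [("soma", "a", 5.0), ("a", "b", 3.0), ("a", "c", 4.0),
         ("c", "d", 2.0), ("c", "e", 6.0)]
G.add_weighted_edges_from(edges)

branch_points = [n for n in G if G.degree(n) >= 3]
tips = [n for n in G if G.degree(n) == 1 and n != "soma"]
total_length = sum(w for _, _, w in G.edges.data("weight"))
max_path = max(nx.shortest_path_length(G, "soma", t, weight="weight") for t in tips)

print({"branch_points": len(branch_points), "tips": len(tips),
       "total_length": total_length, "max_path_from_soma": max_path})
```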
SchNetPack is a versatile neural network toolbox that addresses both the requirements of method development and the application of atomistic machine learning. Version 2.0 comes with an improved data pipeline, modules for equivariant neural networks, and a PyTorch implementation of molecular dynamics. An optional integration with PyTorch Lightning and the Hydra configuration framework powers a flexible command-line interface. This makes SchNetPack 2.0 easily extendable with custom code and ready for complex training tasks, such as the generation of 3D molecular structures.
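For readers unfamiliar with the pattern, below is a generic sketch of the Hydra-plus-PyTorch-Lightning style of config-driven training that the abstract refers to. This is not SchNetPack's actual API or config schema; the class names and config keys are stand-ins, shown only to illustrate how a config tree maps onto Lightning components.

```python
import torch
import pytorch_lightning as pl
from omegaconf import OmegaConf


class ToyRegressor(pl.LightningModule):
    """Stand-in for a config-instantiated model (hypothetical)."""
    def __init__(self, hidden_dim: int = 32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(8, hidden_dim), torch.nn.ReLU(),
            torch.nn.Linear(hidden_dim, 1),
        )

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.net(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


# Hydra-style config: in a real CLI, overrides such as model.hidden_dim=64
# would be merged in from the command line.
cfg = OmegaConf.create({"model": {"hidden_dim": 64}, "trainer": {"max_epochs": 1}})

x = torch.randn(128, 8)
y = x.sum(dim=1, keepdim=True)
loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(x, y), batch_size=16)

model = ToyRegressor(**cfg.model)
trainer = pl.Trainer(**cfg.trainer, logger=False, enable_checkpointing=False)
trainer.fit(model, loader)
```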
The NASA Astrophysics Data System (ADS) is an essential tool for researchers that allows them to explore the astronomy and astrophysics scientific literature, but it has yet to exploit recent advances in natural language processing. At ADASS 2021, we introduced astroBERT, a machine learning language model tailored to the text used in astronomy papers in ADS. In this work we: - announce the first public release of the astroBERT language model; - show how astroBERT improves over existing public language models on astrophysics specific tasks; - and detail how ADS plans to harness the unique structure of scientific papers, the citation graph and citation context, to further improve astroBERT.
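A hedged sketch of how a publicly released, domain-adapted BERT such as astroBERT would typically be loaded with the Hugging Face transformers library follows. The model id "adsabs/astroBERT" is an assumption about where the release is hosted; substitute the id from the official announcement if it differs.

```python
from transformers import AutoModel, AutoTokenizer

model_id = "adsabs/astroBERT"  # assumed hub id for the public release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Dark matter halos shape galaxy rotation curves.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # contextual embeddings for each token
```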
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
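Since the models are released openly, a minimal usage sketch with the transformers library is shown below. The checkpoint id "bigscience/bloom-560m" is assumed to be one of the smaller released variants suitable for a single-machine demo; the full 176B model requires sharded, multi-device loading.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom-560m"  # assumed small variant of the BLOOM family
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Translate to French: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```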
Although prediction models for delirium, a commonly occurring condition during general hospitalization or post-surgery, have not gained huge popularity, their algorithmic bias evaluation is crucial due to the existing association between social determinants of health and delirium risk. In this context, using MIMIC-III and another academic hospital dataset, we present some initial experimental evidence showing how sociodemographic features such as sex and race can impact the model performance across subgroups. With this work, our intent is to initiate a discussion about the intersectionality effects of old age, race and socioeconomic factors on the early-stage detection and prevention of delirium using ML.
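The kind of subgroup evaluation described here can be sketched as follows: train a classifier on tabular clinical features and compare its metrics across sociodemographic groups. The synthetic data, column names, threshold, and model below are placeholders, not the paper's cohort or pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),
    "lab_value": rng.normal(0, 1, n),
    "sex": rng.choice(["F", "M"], n),
})
df["delirium"] = (0.04 * df["age"] + df["lab_value"] + rng.normal(0, 1, n) > 4).astype(int)

X = pd.get_dummies(df.drop(columns="delirium"))
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, df["delirium"], df["sex"], test_size=0.3, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)

# Per-subgroup metrics reveal whether performance differs by demographic group.
for group in sorted(g_te.unique()):
    mask = (g_te == group).to_numpy()
    print(group,
          "recall=", round(recall_score(y_te[mask], pred[mask]), 3),
          "auc=", round(roc_auc_score(y_te[mask], proba[mask]), 3))
```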
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
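A minimal sketch of the kind of workflow MONAI provides, an imaging-aware transform chain plus a purpose-built network, is shown below. The arguments follow MONAI's documented classes as best recalled and may need adjusting to the installed version (for example, older releases name the first UNet argument `dimensions` rather than `spatial_dims`); the random volume stands in for loaded CT/MR data.

```python
import torch
from monai.networks.nets import UNet
from monai.transforms import Compose, ScaleIntensity

# Transform chain operating on a channel-first volume.
transforms = Compose([ScaleIntensity()])

net = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
)

volume = torch.rand(1, 64, 64, 64)        # (channel, D, H, W) stand-in for an image volume
batch = transforms(volume).unsqueeze(0)   # add batch dimension -> (1, 1, 64, 64, 64)
with torch.no_grad():
    logits = net(batch)
print(logits.shape)                       # torch.Size([1, 2, 64, 64, 64])
```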
We provide evidence that learned density functional theory (DFT) force fields are ready for ground-state catalyst discovery. Our key finding is that, although the predicted forces can differ substantially from the ground truth, relaxations driven by learned force fields reach structures with energies similar to or lower than those obtained with the RPBE functional in over 50% of the evaluated systems. This has the surprising implication that learned potentials may already be ready to replace DFT in challenging catalytic systems, such as those found in the Open Catalyst 2020 dataset. Furthermore, we show that a force field trained on a locally harmonic energy surface sharing the minima of the target DFT energy is also able to find lower or similar energy structures in over 50% of cases. This "easy potential" converges in fewer steps than a standard model trained on true energies and forces, which further accelerates the computation. Its success illustrates a key point: a learned potential can locate energy minima even when its force errors are high. The main requirement for structure optimization is simply that the learned potential has the correct minima. Since learned potentials are fast and scale linearly with system size, our results open up the possibility of quickly finding the ground states of large systems.
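The relaxation workflow being evaluated can be sketched with ASE: start from an initial adsorbate-on-slab structure and minimise the energy using a calculator in place of DFT. Here the EMT toy potential stands in for a learned force field; a learned Open Catalyst model would be plugged in as an ASE Calculator in the same way. The specific slab, adsorbate, and convergence settings are illustrative only.

```python
from ase.build import fcc111, add_adsorbate
from ase.calculators.emt import EMT
from ase.optimize import LBFGS

slab = fcc111("Cu", size=(2, 2, 3), vacuum=10.0)
add_adsorbate(slab, "O", height=1.5, position="fcc")

slab.calc = EMT()              # replace with a learned-potential calculator
opt = LBFGS(slab, logfile=None)
opt.run(fmax=0.05, steps=200)  # relax until the max force falls below 0.05 eV/Å

print("Relaxed energy (eV):", slab.get_potential_energy())
```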
Early diagnosis of Type 2 Diabetes Mellitus (T2DM) is crucial for enabling timely therapeutic interventions and lifestyle changes. As medical imaging data become more widely available for many patient populations, we sought to investigate whether image-derived phenotypic data can be leveraged in tabular learning classifier models to predict T2DM incidence without the use of invasive blood lab measurements. We show that both neural network and decision tree models using image-derived phenotypes can predict patient T2DM status with recall scores as high as 87.6%. We also propose the novel use of these same architectures as "SynthA1c encoders" that are able to output interpretable values mimicking empirical blood hemoglobin A1c laboratory measurements. Finally, we demonstrate that the sensitivity of T2DM risk prediction models to small perturbations in the input vector components can be used to predict their performance on covariates sampled from previously unseen patient populations.
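To make the SynthA1c idea concrete, here is a hedged sketch: a tabular regressor over image-derived phenotypes outputs an interpretable HbA1c-like value, which can then be thresholded for T2DM screening. The features, synthetic data, model choice, and use of the usual 6.5% diagnostic cut-off are illustrative placeholders, not the paper's actual encoder.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1500
# Stand-ins for image-derived phenotypes (e.g. fat fraction, organ volumes).
X = rng.normal(size=(n, 4))
hba1c = 5.5 + 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.3, n)  # synthetic HbA1c (%)
X_tr, X_te, y_tr, y_te = train_test_split(X, hba1c, test_size=0.3, random_state=0)

encoder = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred_hba1c = encoder.predict(X_te)

# Screening decision derived from the synthetic A1c value.
pred_t2dm = pred_hba1c >= 6.5
true_t2dm = y_te >= 6.5
recall = (pred_t2dm & true_t2dm).sum() / max(true_t2dm.sum(), 1)
print(f"recall on held-out synthetic data: {recall:.3f}")
```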